Auxiliary Learning as a step towards Artificial General Intelligence
Auxiliary Learning is a machine learning approach in which the model acknowledges the existence of objects that do not fall under any of its learned categories. The name "Auxiliary Learning" was chosen because the approach introduces an auxiliary class. The paper focuses on increasing the generality of existing narrow-purpose neural networks and highlights the need to handle unknown objects, using a Cat & Dog binary classifier as a running example throughout.
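The idea above can be sketched as a classifier whose output layer carries one extra "auxiliary" class for unknown objects. The following toy NumPy snippet is illustrative only (the linear model, feature size, and class names are assumptions, not the paper's implementation):

```python
import numpy as np

# Illustrative sketch: a Cat & Dog classifier extended with a third,
# auxiliary class that absorbs objects outside the learned categories.
CLASSES = ["cat", "dog", "unknown"]   # index 2 is the auxiliary class

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(8, 3))  # toy linear classifier over 8 features

def classify(x):
    """Return the predicted class name and the full probability vector."""
    p = softmax(x @ W)
    return CLASSES[int(p.argmax())], p

label, probs = classify(rng.normal(size=8))
print(label, probs)
```

At training time, the auxiliary class would be supervised with out-of-category samples so that anything unfamiliar is routed to "unknown" rather than forced into cat or dog.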
Self-Supervised Generalisation with Meta Auxiliary Learning
Liu, Shikun, Davison, Andrew J., Johns, Edward
Learning with auxiliary tasks has been shown to improve the generalisation of a primary task. However, this comes at the cost of manually-labelling additional tasks which may, or may not, be useful for the primary task. We propose a new method which automatically learns labels for an auxiliary task, such that any supervised learning task can be improved without requiring access to additional data. The approach is to train two neural networks: a label-generation network to predict the auxiliary labels, and a multi-task network to train the primary task alongside the auxiliary task. The loss for the label-generation network incorporates the multi-task network's performance, and so this interaction between the two networks can be seen as a form of meta learning. We show that our proposed method, Meta AuXiliary Learning (MAXL), outperforms single-task learning on 7 image datasets by a significant margin, without requiring additional auxiliary labels. We also show that MAXL outperforms several other baselines for generating auxiliary labels, and is even competitive when compared with human-defined auxiliary labels. The self-supervised nature of our method leads to a promising new direction towards automated generalisation. The source code is available at \url{https://github.com/lorenmt/maxl}.
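The bi-level structure described in the abstract (a label-generation network whose update depends on how well the multi-task network performs on the primary task) can be sketched in a few dozen lines. The snippet below is a simplified stand-in, not the authors' implementation: the network sizes, the number of auxiliary classes, and the use of finite differences in place of MAXL's gradients through the inner update are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary primary task (illustrative; not the paper's datasets)
X = rng.normal(size=(64, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
Y1 = np.eye(2)[y]                        # one-hot primary labels
K_AUX = 4                                # number of auxiliary classes (assumption)
H = 8                                    # shared hidden width (assumption)
U0 = rng.normal(scale=0.5, size=(4, H))  # fixed init for the shared trunk

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def inner_train(phi, steps=40, lr=0.5, lam=1.0):
    """Train the multi-task network (shared trunk + primary and auxiliary
    heads) against the soft auxiliary labels generated by phi, then return
    the final primary loss -- the signal the meta step tries to reduce."""
    aux = softmax(X @ phi)               # label-generation net: soft labels
    U, W, V = U0.copy(), np.zeros((H, 2)), np.zeros((H, K_AUX))
    n = len(X)
    for _ in range(steps):
        Hid = np.tanh(X @ U)
        P, Q = softmax(Hid @ W), softmax(Hid @ V)
        dP, dQ = (P - Y1) / n, (Q - aux) / n          # cross-entropy grads
        dH = (dP @ W.T + lam * dQ @ V.T) * (1 - Hid ** 2)
        U -= lr * (X.T @ dH)
        W -= lr * (Hid.T @ dP)
        V -= lr * (Hid.T @ dQ)
    P = softmax(np.tanh(X @ U) @ W)
    return -np.log(P[np.arange(n), y] + 1e-9).mean()

# Outer (meta) step: adjust phi so the *trained* primary loss goes down.
# MAXL differentiates through the inner update; finite differences are
# used here only as a cheap, illustrative stand-in for that gradient.
phi = rng.normal(scale=0.1, size=(4, K_AUX))
base = inner_train(phi)
eps, meta_lr = 1e-3, 0.5
grad = np.zeros_like(phi)
for idx in np.ndindex(*phi.shape):
    d = np.zeros_like(phi); d[idx] = eps
    grad[idx] = (inner_train(phi + d) - inner_train(phi - d)) / (2 * eps)
phi -= meta_lr * grad
print("primary loss before/after meta step:", base, inner_train(phi))
```

The key design point mirrored here is that the auxiliary labels only matter through the shared trunk: the auxiliary head's gradients flow into the shared representation, which is what allows a well-chosen auxiliary task to improve the primary one.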